Dilemma Zone


Roundabout Dilemma Zone Data Mining and Forecasting with Trajectory Prediction and Graph Neural Networks

Satish, Manthan Chelenahalli, Lu, Duo, Chakravarthi, Bharatesh, Farhadi, Mohammad, Yang, Yezhou

arXiv.org Artificial Intelligence

Traffic roundabouts, as complex and critical road scenarios, pose significant safety challenges for autonomous vehicles. In particular, the encounter of a vehicle with a dilemma zone (DZ) at a roundabout intersection is a pivotal concern. This paper presents an automated system that leverages trajectory forecasting to predict DZ events, specifically at traffic roundabouts. Our system aims to enhance safety standards in both autonomous and manual transportation. The core of our approach is a modular, graph-structured recurrent model that forecasts the trajectories of diverse agents, taking into account agent dynamics and integrating heterogeneous data, such as semantic maps. This model, based on graph neural networks, aids in predicting DZ events and enhances traffic management decision-making. We evaluated our system using a real-world dataset of traffic roundabout intersections. Our experimental results demonstrate that our dilemma forecasting system achieves high precision with a low false-positive rate of 0.1. This research represents an advancement in roundabout DZ data mining and forecasting, contributing to the assurance of intersection safety in the era of autonomous vehicles.
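The graph-structured recurrent forecaster described in this abstract can be illustrated with a deliberately minimal sketch: agents are graph nodes, edges connect agents within an interaction radius, a message-passing step aggregates neighbour velocities, and a recurrent-style blend extrapolates each trajectory one step. The radius, blend weight, and function names below are hypothetical illustrations, not the authors' trained model.

```python
import numpy as np

def predict_next_positions(positions, velocities, radius=20.0, alpha=0.3):
    """One forecasting step of a toy graph-structured model (illustrative only).

    Each agent is a node; edges connect agents within `radius` metres.
    A message-passing step averages neighbour velocities, and a simple
    recurrent-style update blends each agent's own velocity with the
    neighbourhood message before extrapolating the position by one step.
    """
    positions = np.asarray(positions, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    n = len(positions)
    # Build the interaction graph from pairwise distances.
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    adj = (dists < radius) & ~np.eye(n, dtype=bool)
    messages = np.zeros_like(velocities)
    for i in range(n):
        if adj[i].any():
            messages[i] = velocities[adj[i]].mean(axis=0)  # aggregate neighbours
    blended = (1 - alpha) * velocities + alpha * messages   # recurrent-style blend
    return positions + blended  # extrapolate one time step (dt = 1 s)
```

In a real system this step would be learned (GNN message functions and a trained recurrent cell) and conditioned on semantic map features; the sketch only shows the graph-plus-recurrence structure.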


Investigating Personalized Driving Behaviors in Dilemma Zones: Analysis and Prediction of Stop-or-Go Decisions

Qin, Ziye, Li, Siyan, Wu, Guoyuan, Barth, Matthew J., Abdelraouf, Amr, Gupta, Rohit, Han, Kyungtae

arXiv.org Artificial Intelligence

Dilemma zones at signalized intersections present a commonly occurring but unsolved challenge for both drivers and traffic operators. The onset of a yellow light prompts varied responses from different drivers: some may brake abruptly, compromising ride comfort, while others may accelerate, increasing the risk of red-light violations and potential safety hazards. Such diversity in drivers' stop-or-go decisions may result not only from surrounding traffic conditions but also from personalized driving behaviors. To this end, identifying personalized driving behaviors and integrating them into advanced driver assistance systems (ADAS) to mitigate the dilemma zone problem presents an intriguing scientific question. In this study, we employ a game engine-based (i.e., CARLA-enabled) driving simulator to collect high-resolution vehicle trajectories, incoming traffic signal phase and timing information, and stop-or-go decisions from four subject drivers in various scenarios. This approach allows us to analyze personalized driving behaviors in dilemma zones and develop a Personalized Transformer Encoder to predict individual drivers' stop-or-go decisions. The results show that the Personalized Transformer Encoder improves the accuracy of predicting driver decision-making in the dilemma zone by 3.7% to 12.6% compared to the Generic Transformer Encoder, and by 16.8% to 21.6% over the binary logistic regression model.
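As a toy counterpart to the binary logistic-regression baseline the abstract compares against, the sketch below fits a stop-or-go classifier with plain gradient descent on hand-made features at yellow onset. The feature choice, learning rate, and function names are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def train_stop_go_logistic(X, y, lr=1.0, epochs=3000):
    """Fit a binary logistic-regression stop-or-go baseline (toy version).

    X: (n, d) features at yellow onset, e.g. normalized approach speed.
    y: (n,) labels, 1 = go, 0 = stop.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    y = np.asarray(y, dtype=float)
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)      # mean cross-entropy gradient
    return w

def predict_go(w, x):
    """Return True if the fitted model predicts the driver will go."""
    x = np.append(np.asarray(x, dtype=float), 1.0)
    return 1.0 / (1.0 + np.exp(-x @ w)) >= 0.5
```

The paper's Personalized Transformer Encoder instead consumes trajectory and signal-timing sequences per driver; this sketch only shows the simpler baseline it is benchmarked against.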


Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety

Bova, Paolo, Di Stefano, Alessandro, Han, The Anh

arXiv.org Artificial Intelligence

In the context of rapid discoveries by leaders in AI, governments must consider how to design regulation that matches the increasing pace of new AI capabilities. Regulatory Markets for AI is a proposal designed with adaptability in mind. It involves governments setting outcome-based targets for AI companies to achieve, which they can show by purchasing services from a market of private regulators. We use an evolutionary game theory model to explore the role governments can play in building a Regulatory Market for AI systems that deters reckless behaviour. We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal. These 'Bounty Incentives' only reward private regulators for catching unsafe behaviour. We argue that AI companies will likely learn to tailor their behaviour to how much effort regulators invest, discouraging regulators from innovating. Instead, we recommend that governments always reward regulators, except when they find that those regulators failed to detect unsafe behaviour that they should have. These 'Vigilant Incentives' could encourage private regulators to find innovative ways to evaluate cutting-edge AI systems.
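The evolutionary game-theoretic setting the abstract describes can be sketched with a one-population replicator dynamic in which AI companies choose between safe and unsafe development. The payoff structure below (a speed benefit for unsafe play, an expected penalty scaled by the regulator's detection probability) is a hypothetical simplification, not the paper's model; it only illustrates how stronger detection incentives can drive reckless behaviour out of the population.

```python
def replicator_step(x, benefit, penalty, detect_prob, dt=0.01):
    """One Euler step of the replicator dynamic for a toy AI-safety game.

    x: fraction of companies playing 'unsafe'. An unsafe company gains a
    speed advantage `benefit` but pays `penalty` when a regulator detects
    it (probability `detect_prob`); safe play earns a baseline of zero.
    """
    f_unsafe = benefit - detect_prob * penalty  # expected payoff of unsafe play
    f_safe = 0.0                                # baseline payoff of compliance
    return x + dt * x * (1 - x) * (f_unsafe - f_safe)

def evolve(x0, benefit, penalty, detect_prob, steps=5000):
    """Iterate the dynamic to (approximate) equilibrium."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, benefit, penalty, detect_prob)
    return x
```

With vigilant, well-incentivised regulators (high detection probability) the unsafe fraction collapses toward zero; with lax detection it takes over, mirroring the abstract's warning about poorly designed incentives.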


To Regulate or Not: A Social Dynamics Analysis of an Idealised AI Race

Han, The Anh | Moniz Pereira, Luís (Universidade Nova de Lisboa) | Santos, Francisco C. (INESC-ID and Instituto Superior Técnico, Universidade de Lisboa) | Lenaerts, Tom (Machine Learning Group, Université Libre de Bruxelles)

Journal of Artificial Intelligence Research

Rapid technological advancements in Artificial Intelligence (AI), as well as the growing deployment of intelligent technologies in new application domains, have generated serious anxiety and a fear of missing out among different stakeholders, fostering a racing narrative. Whether real or not, the belief in such a race for domain supremacy through AI can make it real simply from its consequences, as put forward by the Thomas theorem. These consequences may be negative, as racing for technological supremacy creates a complex ecology of choices that could push stakeholders to underestimate or even ignore ethical and safety procedures. As a consequence, different actors are urging that both the normative and social impact of these technological advancements be considered, contemplating the use of the precautionary principle in AI innovation and research. Yet, given the breadth and depth of AI and its advances, it is difficult to assess which technology needs regulation and when. As there is no easy access to data describing this alleged AI race, theoretical models are necessary to understand its potential dynamics, allowing for the identification of when procedures need to be put in place to favour outcomes beneficial for all. We show that, next to the risks of setbacks and of being reprimanded for unsafe behaviour, the time-scale in which domain supremacy can be achieved plays a crucial role. When this can be achieved in the short term, those who completely ignore the safety precautions are bound to win the race, but at a cost to society, apparently requiring regulatory action. Our analysis reveals that imposing regulations for all risk and timing conditions may not have the anticipated effect, as a dilemma between what is individually preferred and what is globally beneficial arises only under specific conditions. Similar observations can be made for the long-term development case.
Yet, unlike the short-term situation, conditions can there be identified that require the promotion of risk-taking, as opposed to compliance with safety regulations, in order to improve social welfare. These results remain robust both when two or several actors are involved in the race and when risk-taking behaviour produces collective rather than individual setbacks. When defining codes of conduct and regulatory policies for applications of AI, a clear understanding of the time-scale of the race is thus required, as this may induce important non-trivial effects. This article is part of the special track on AI and Society.
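The role the abstract assigns to the race's time-scale can be illustrated with a toy expected-payoff calculation: ignoring safety doubles development speed, but each round of unsafe development risks a setback that forfeits everything. All numbers and the payoff form below are hypothetical, not the paper's calibration; they only reproduce the qualitative point that short races favour reckless play while long races do not.

```python
def expected_payoff(unsafe, rounds_needed, prize, risk):
    """Expected payoff rate for one actor in a toy version of the AI race.

    unsafe: True if the actor skips safety precautions (doubling its speed).
    rounds_needed: development rounds a safe actor needs for domain supremacy.
    prize: value of reaching supremacy; risk: per-round setback probability
    incurred only by unsafe play (a setback forfeits the prize).
    Returns the survival-weighted prize per round invested.
    """
    speed = 2 if unsafe else 1
    rounds = rounds_needed / speed                 # rounds to reach supremacy
    survive = (1 - risk) ** rounds if unsafe else 1.0
    return survive * prize / rounds
```

For a short race, the speed advantage outweighs the accumulated setback risk; for a long race, the compounded risk of a setback makes compliance the better strategy, matching the abstract's distinction between the two regimes.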


A Review on Drivers' Red Light Running and Turning Behaviour Prediction

Komol, Md Mostafizur Rahman, Elhenawy, Mohammed, Yasmin, Shamsunnahar, Masoud, Mahmoud, Rakotonirainy, Andry

arXiv.org Artificial Intelligence

Every year, around 1.3 million people worldwide are killed in road crashes, with approximately 20 to 50 million suffering life-threatening injuries (International Transport Forum, 2018; World Health Organisation, 2018). Nevertheless, road traffic deaths vary from 9.3 to 26.6 per 100,000 population among countries depending on their income level, while the global rate stands at 18.2 per 100,000 population (World Health Organisation, 2018). Moreover, traffic collisions at intersections are a significant threat to upholding road safety. As a whole, 45% of severe injuries occur at intersections, including 22% of fatal crashes (Li, Jia, et al., 2016). Drivers often inadvertently fail to brake immediately at the onset of a red light, or deliberately run through the red-light signal, and may also misjudge the movement of the right-angle vehicle [in a right-hand driving condition] while crossing the intersection (Zhang et al., 2018). Especially at the onset of a yellow signal, drivers face an uncertain stop-or-go decision and risk a rear-end collision, a right-angle collision, or an uncomfortably hard brake, often resulting in injuries or death (Gazis et al., 1960; Majhi & Senathipathi, 2019).
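The stop-or-go conflict described here is the classic dilemma zone formulated by Gazis et al. (1960). A simplified kinematic version computes the minimum distance a driver needs to stop and the maximum distance from which the intersection can still be cleared during the yellow interval; when the former exceeds the latter, vehicles in between can do neither. The parameter values below are illustrative, and the constant-speed clearing assumption (no acceleration term) is a simplification of the original formulation.

```python
def dilemma_zone(v, yellow, reaction, decel, width, length):
    """Simplified dilemma-zone bounds in the spirit of Gazis et al. (1960).

    v: approach speed (m/s); yellow: yellow duration (s); reaction: driver
    reaction time (s); decel: comfortable deceleration (m/s^2); width:
    intersection width (m); length: vehicle length (m).
    Returns (x_clear, x_stop): a dilemma zone exists for distances x to the
    stop line with x_clear < x < x_stop, where the driver can neither stop
    comfortably nor clear the intersection before the yellow ends.
    """
    x_stop = v * reaction + v * v / (2.0 * decel)   # min distance needed to stop
    x_clear = v * yellow - (width + length)         # max distance that still clears
    return x_clear, x_stop
```

For example, at 60 km/h (about 16.7 m/s) with a 3 s yellow, 1 s reaction time, 3 m/s² deceleration, a 20 m intersection, and a 5 m vehicle, the zone spans roughly 25 m to 63 m from the stop line.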